34 research outputs found

    Reliability Analysis of Component-Based Systems with Multiple Failure Modes

    This paper presents a novel approach to the reliability modeling and analysis of component-based systems that deals with multiple failure modes and the study of error propagation among components. The proposed model allows specifying each component's propensity to produce, propagate, transform, or mask different failure modes. These component-level reliability specifications, together with information about the system's global structure, allow precise estimation of reliability properties by means of analytical closed formulas, probabilistic model checking, or simulation methods. To support the rapid identification of components that could heavily affect the system's reliability, we also show how our modeling approach easily supports the automated estimation of the system's sensitivity to variations in the reliability properties of its components. The results of this analysis allow system designers and developers to identify critical components where it is worth spending additional improvement effort.
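
    As an illustration of the kind of analysis the abstract describes, the sketch below estimates system reliability by Monte Carlo simulation of failure-mode production, propagation, transformation, and masking along a serial component pipeline. The component names, failure modes, probabilities, and the serial structure are hypothetical placeholders, not values or notation from the paper.

    ```python
    import random

    # Hypothetical component-level specification (names and probabilities are
    # invented for illustration): probability of spontaneously producing a
    # failure mode, and how an incoming failure mode is handled.
    PRODUCE = {
        "sensor":   {"value": 0.02},
        "filter":   {"value": 0.01},
        "actuator": {"crash": 0.005},
    }
    # HANDLE[component][incoming_mode] = (outgoing_mode, propagation_probability);
    # with probability 1 - p the incoming mode is masked.
    HANDLE = {
        "filter":   {"value": ("value", 0.6)},   # propagates 60% of value errors
        "actuator": {"value": ("crash", 0.3)},   # transforms value errors into crashes
    }
    PIPELINE = ["sensor", "filter", "actuator"]  # assumed serial system structure

    def one_run():
        """Simulate one execution; return True if the final output is failure-free."""
        mode = None  # failure mode currently flowing through the pipeline
        for comp in PIPELINE:
            if mode is not None:
                # Propagate, transform, or mask the incoming failure mode.
                out, p = HANDLE.get(comp, {}).get(mode, (mode, 1.0))
                mode = out if random.random() < p else None
            if mode is None:
                # The component may spontaneously produce a failure mode.
                for m, p in PRODUCE.get(comp, {}).items():
                    if random.random() < p:
                        mode = m
                        break
        return mode is None

    runs = 100_000
    reliability = sum(one_run() for _ in range(runs)) / runs
    print(f"estimated system reliability: {reliability:.4f}")
    ```

    A sensitivity analysis of the kind mentioned in the abstract could be approximated by re-running this simulation while perturbing one component's probabilities at a time and observing the change in the estimate.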

    Assuring Design Diversity in N-Version Software: A Design Paradigm for N-Version Programming

    The N-Version Programming (NVP) approach achieves fault-tolerant software units, called N-Version Software (NVS) units, through the development and use of software diversity. To maximize the effectiveness of the NVP approach, the probability of similar errors coinciding at the NVS decision points should be reduced to the lowest possible value. Design diversity is potentially an effective method to achieve this result. The major concern of this paper is to formulate a set of rigorous guidelines, or a design paradigm, for the investigation and implementation of design diversity in building NVS units for practical applications. This effort includes the description of the most recent formulation of the NVP design paradigm, which integrates the knowledge and experience obtained from fault-tolerant system design with software engineering techniques, and the application of this design paradigm to a real-world project for an extensive evaluation. Some limitations of the approach are ..
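
    To make the NVS mechanism concrete, here is a minimal sketch of an N-version unit with majority voting at a decision point. The three version functions and the voting rule are hypothetical stand-ins chosen for illustration; the paper itself is about the design paradigm for developing such diverse versions, not about the voter.

    ```python
    from collections import Counter
    import math

    def nvs_unit(versions, inputs):
        """Run independently developed versions on the same inputs and vote.

        Returns the strict-majority result, or raises if the versions disagree
        too much at this decision point (a coincident-error situation).
        """
        results = [v(*inputs) for v in versions]
        value, count = Counter(results).most_common(1)[0]
        if count * 2 <= len(results):          # no strict majority
            raise RuntimeError("NVS decision point: no majority agreement")
        return value

    # Hypothetical diverse implementations of the same specification
    # (integer square root), e.g. produced by independent teams.
    def v1(x):
        return int(x ** 0.5)

    def v2(x):
        r = 0
        while (r + 1) * (r + 1) <= x:
            r += 1
        return r

    def v3(x):
        return math.isqrt(x)

    print(nvs_unit([v1, v2, v3], (144,)))   # -> 12
    ```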

    Compiler-support for robust multi-core computing

    Embedded computing is characterised by the limited availability of computing resources. Further, embedded systems are often used in safety-critical applications with real-time constraints. Thus, software development has to follow rigorous procedures to minimise the risk of system failures. However, besides the inherent application complexities, there is also an increased technology-based complexity due to the shift to concurrent programming of multi-core systems. For such systems it is quite challenging to develop safe and resource-efficient systems. In this paper we argue for the need for better software development tools to cope with this challenge. For example, we outline how compilers can help to simplify the writing of fault-tolerant and robust software, which keeps the application code more compact, comprehensible, and maintainable. We take a rather extreme stand by promoting a functional programming approach. This functional programming paradigm reduces the complexity of program analysis and thus allows for more efficient and powerful techniques. We will implement an almost transparent support for robustness within the SaC research compiler, which accepts a C-like functional program as input. Compared to conventional approaches in the field of automatic software-controlled resilience, our functional setting will allow for lower overhead, making the approach interesting for embedded computing as well as for high-performance computing.
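
    The paper targets compiler-level support inside SaC; purely as a language-agnostic illustration of why purity helps software-controlled resilience, the sketch below replicates a side-effect-free function and compares the results to detect transient faults. The decorator name, replication factor, and example function are assumptions, not the SaC mechanism.

    ```python
    import functools

    def replicated(n=2):
        """Hypothetical sketch of software-controlled resilience for pure functions:
        execute a side-effect-free function n times and compare the results.
        A compiler could insert such replication transparently because purity
        guarantees the duplicated calls cannot interfere with each other."""
        def wrap(fn):
            @functools.wraps(fn)
            def run(*args, **kwargs):
                results = [fn(*args, **kwargs) for _ in range(n)]
                if any(r != results[0] for r in results[1:]):
                    raise RuntimeError(f"divergent results detected in {fn.__name__}")
                return results[0]
            return run
        return wrap

    @replicated(n=2)
    def dot(xs, ys):
        # Pure computation: no shared state, safe to re-execute.
        return sum(x * y for x, y in zip(xs, ys))

    print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # -> 32.0
    ```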

    Evaluation of a Dependability Mechanism for Cyber Physical Systems

    Cyber-Physical Systems (CPS) are systems of collaborating computational entities. Concepts such as autonomous cars, the smart electric grid, implanted medical devices, and smart manufacturing are practical examples of CPS. However, the open and cooperative nature of CPS poses a significant new challenge in assuring dependability. The DEIS project addresses this important and unsolved challenge by developing technologies that facilitate the efficient synthesis of components and systems based on their dependability information. The key innovation at the heart of DEIS is the concept of a Digital Dependability Identity (DDI). A DDI contains all the information that uniquely describes the dependability characteristics of a CPS or CPS component. DDIs are synthesised at development time and are the basis for the (semi-)automated integration of components into systems during development, as well as for the fully automated dynamic integration of systems into systems of systems in the field. In this paper we present an overview of the DDI. Additionally, we provide metrics for evaluating the DDI's impact on CPS dependability, and the results of an evaluation of that impact in four industrial CPS. These results demonstrate the positive impact of the DDI on the dependability of CPS.
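
    The abstract does not define the contents of a DDI beyond "all the information that uniquely describes the dependability characteristics" of a component. Purely as an illustrative assumption, the sketch below models a DDI as a record of guaranteed and demanded dependability attributes with a naive integration-time compatibility check; all field names, levels, and the checking rule are hypothetical, not the DEIS data model.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class DDI:
        """Hypothetical Digital Dependability Identity of a CPS component.

        `guarantees` are dependability properties the component provides,
        `demands` are properties it requires from its environment."""
        component: str
        guarantees: dict = field(default_factory=dict)  # e.g. {"integrity_level": "ASIL-B"}
        demands: dict = field(default_factory=dict)     # e.g. {"integrity_level": "ASIL-A"}

    LEVELS = ["QM", "ASIL-A", "ASIL-B", "ASIL-C", "ASIL-D"]  # assumed ordering

    def satisfies(guaranteed: str, demanded: str) -> bool:
        """A guarantee satisfies a demand if it is at least as strong."""
        return LEVELS.index(guaranteed) >= LEVELS.index(demanded)

    def can_integrate(provider: DDI, consumer: DDI) -> bool:
        """Naive integration-time check: every demand of the consumer must be
        covered by a sufficient guarantee of the provider."""
        return all(
            key in provider.guarantees and satisfies(provider.guarantees[key], value)
            for key, value in consumer.demands.items()
        )

    sensor = DDI("lidar", guarantees={"integrity_level": "ASIL-B"})
    planner = DDI("trajectory_planner", demands={"integrity_level": "ASIL-B"})
    print(can_integrate(sensor, planner))   # -> True
    ```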

    Obama's Victory of Hope over Hate

    The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide tailored services and the added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic when considering resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of “error zones”, as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to elucidate the discussion.
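
    The abstract introduces "error zones" but does not enumerate them here. As an illustrative assumption only, the sketch below classifies an error by the layer of a virtualized HPC stack in which it originates and maps each zone to the layers it can affect; the zone names and containment rules are invented for the example, not the paper's actual classification.

    ```python
    from enum import Enum

    class ErrorZone(Enum):
        """Hypothetical error zones for a virtualized HPC node."""
        GUEST_APP = "application inside a VM"
        GUEST_OS = "guest operating system"
        HYPERVISOR = "virtual machine monitor"
        HOST = "host OS / physical node"

    # Assumed containment: an error in a zone can affect that zone and the layers running on top of it.
    AFFECTS = {
        ErrorZone.GUEST_APP:  {ErrorZone.GUEST_APP},
        ErrorZone.GUEST_OS:   {ErrorZone.GUEST_OS, ErrorZone.GUEST_APP},
        ErrorZone.HYPERVISOR: {ErrorZone.HYPERVISOR, ErrorZone.GUEST_OS, ErrorZone.GUEST_APP},
        ErrorZone.HOST:       set(ErrorZone),  # a host failure takes down everything
    }

    def blast_radius(zone: ErrorZone) -> set:
        """Return the set of zones an error originating in `zone` can reach."""
        return AFFECTS[zone]

    print(sorted(z.name for z in blast_radius(ErrorZone.GUEST_OS)))
    ```

    Stronger isolation shrinks the blast radius of the lower zones at the cost of performance, which is the trade-off the paper analyses.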

    Smart cities: a taxonomy for the efficient management of lighting in unpredicted environments

    In recent years there has been a substantial increase in the number of outdoor lighting installations; their energy management has not improved correspondingly, and electricity consumption has skyrocketed. Most of this energy does not come from renewable sources, with all the negative effects that this entails. Public lighting can represent up to 54% of the energy consumption of a municipality and up to 61% of its electricity consumption. This work focuses on the analysis of the factors to consider in the implementation and application of a lighting control system for energy saving in a real environment. The system should be based on the collection of data by the different sensors installed in the luminaires along the road, oriented towards the context of Smart Cities and Intelligent Transport Systems (ITS). The main objective is to reduce the consumption of electrical energy as much as possible while maintaining the comfort of the road user. To this end, the weak points of these systems will be identified and their elimination will be pursued. A study will be made of the systems available today and their characteristics will be analysed. Based on the characteristics of the systems analysed, the requirements of the proposed system will be determined, and the characteristics that distinguish this project from the rest will be established. An architecture proposal that seeks to optimise the analysed parameters will be presented.
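
    As a concrete illustration of the kind of control loop such a system could run, the sketch below dims a luminaire according to hypothetical presence and ambient-light readings while never dropping below an assumed minimum comfort level. The sensor fields, thresholds, and dimming levels are invented for the example, not parameters from this work.

    ```python
    def dim_level(presence_detected: bool, ambient_lux: float,
                  comfort_min: float = 0.3) -> float:
        """Return a dimming level in [0, 1] for one luminaire.

        Hypothetical policy: full brightness when a road user is detected,
        otherwise dim towards `comfort_min`, and switch off only when there is
        enough ambient light (e.g. daylight). Thresholds are assumptions."""
        if ambient_lux > 100.0:          # enough daylight: lighting not needed
            return 0.0
        if presence_detected:            # pedestrian or vehicle nearby
            return 1.0
        return comfort_min               # idle: minimum comfortable level

    # Example readings from sensors installed in the luminaires (invented values).
    readings = [
        {"presence_detected": True,  "ambient_lux": 5.0},
        {"presence_detected": False, "ambient_lux": 5.0},
        {"presence_detected": False, "ambient_lux": 500.0},
    ]
    for r in readings:
        print(r, "->", dim_level(**r))
    ```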